58 research outputs found

    On the Semantics of Gringo

    Full text link
    Input languages of answer set solvers are based on the mathematically simple concept of a stable model. But many useful constructs available in these languages, including local variables, conditional literals, and aggregates, cannot be easily explained in terms of stable models in the sense of the original definition of this concept and its straightforward generalizations. Manuals written by designers of answer set solvers usually explain such constructs using examples and informal comments that appeal to the user's intuition, without references to any precise semantics. We propose to approach the problem of defining the semantics of gringo programs by translating them into the language of infinitary propositional formulas. This semantics allows us to study equivalent transformations of gringo programs using natural deduction in infinitary propositional logic.
    Comment: Proceedings of Answer Set Programming and Other Computing Paradigms (ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey.
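    To make the translation idea concrete, here is a minimal sketch (the notation is ours, not the paper's exact definitions): a rule with a schematic variable stands for the implications corresponding to all of its ground instances, and since the set of ground terms may be infinite, the resulting conjunction is genuinely infinitary.

        % Sketch only; the paper's translation also covers conditional
        % literals and aggregates, which this example does not attempt.
        \[
          q(X) \leftarrow p(X)
          \qquad\rightsquigarrow\qquad
          \bigwedge_{t\,\in\,\text{ground terms}} \bigl( p(t) \rightarrow q(t) \bigr)
        \]

    Stable models are then defined directly on such infinitary formulas, which is what lets equivalent transformations of gringo programs be verified by natural deduction in infinitary propositional logic, as the abstract notes.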

    SDRL: Interpretable and Data-efficient Deep Reinforcement Learning Leveraging Symbolic Planning

    Full text link
    Deep reinforcement learning (DRL) has achieved great success by learning directly from high-dimensional sensory inputs, yet is notorious for its lack of interpretability. Interpretability of the subtasks is critical in hierarchical decision-making, as it increases the transparency of black-box-style DRL approaches and helps RL practitioners better understand the high-level behavior of the system. In this paper, we introduce symbolic planning into DRL and propose a framework of Symbolic Deep Reinforcement Learning (SDRL) that can handle both high-dimensional sensory inputs and symbolic planning. Task-level interpretability is enabled by relating symbolic actions to options. This framework features a planner -- controller -- meta-controller architecture, whose components take charge of subtask scheduling, data-driven subtask learning, and subtask evaluation, respectively. The three components cross-fertilize each other and eventually converge to an optimal symbolic plan along with the learned subtasks, bringing together the advantages of long-term planning with symbolic knowledge and end-to-end reinforcement learning directly from high-dimensional sensory input. Experimental results validate the interpretability of subtasks, along with improved data efficiency compared with state-of-the-art approaches.
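    The planner -- controller -- meta-controller division of labor can be pictured with a small toy loop. The sketch below is a hypothetical illustration under our own assumptions; every class, method, and the value-update rule is invented for exposition and is not the SDRL paper's actual interface or algorithm.

        # Hypothetical sketch of a planner--controller--meta-controller loop
        # in the spirit of the SDRL architecture; all names are illustrative.
        import random

        class Planner:
            def plan(self, subtask_values):
                # Subtask scheduling: order subtasks by current value estimates
                # (a stand-in for genuine symbolic planning).
                return sorted(subtask_values, key=subtask_values.get, reverse=True)

        class Controller:
            def run_option(self, subtask):
                # Data-driven subtask learning: an option (sub-policy) would be
                # trained end-to-end here; we simulate its return instead.
                return random.random()

        class MetaController:
            def __init__(self, subtasks):
                self.subtask_values = {s: 0.0 for s in subtasks}
            def evaluate(self, subtask, ret, lr=0.1):
                # Subtask evaluation: move the value estimate toward the
                # observed return, feeding back into future plans.
                v = self.subtask_values[subtask]
                self.subtask_values[subtask] = v + lr * (ret - v)

        subtasks = ["get_key", "open_door", "reach_goal"]
        planner, controller, meta = Planner(), Controller(), MetaController(subtasks)
        for episode in range(100):
            for subtask in planner.plan(meta.subtask_values):
                meta.evaluate(subtask, controller.run_option(subtask))
        print(meta.subtask_values)  # value estimates that guide the next plan

    In this toy version the "cross-fertilization" is just the shared value table: the planner reads what the meta-controller writes, and the controller supplies the experience being evaluated.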

    Representing First-Order Causal Theories by Logic Programs

    Get PDF
    Nonmonotonic causal logic, introduced by McCain and Turner (McCain, N. and Turner, H. 1997. Causal theories of action and change. In Proceedings of National Conference on Artificial Intelligence (AAAI), Stanford, CA, 460–465), became the basis for the semantics of several expressive action languages. McCain's embedding of definite propositional causal theories into logic programming paved the way for the use of answer set solvers for answering queries about actions described in such languages. In this paper we extend this embedding to nondefinite theories and to first-order causal logic.
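    For context, McCain's translation for the definite propositional case is short enough to sketch (in our notation, as the construction is commonly presented): each causal rule becomes a logic program rule by applying negation as failure to the complements of the body literals.

        \[
          l_0 \Leftarrow l_1 \wedge \cdots \wedge l_m
          \qquad\rightsquigarrow\qquad
          l_0 \leftarrow \mathit{not}\ \overline{l_1},\, \ldots,\, \mathit{not}\ \overline{l_m}
        \]

    Here $\overline{l}$ denotes the literal complementary to $l$. Intuitively, the head counts as caused whenever no body literal can be refuted, which is what makes answer set solvers applicable to queries about the corresponding action domains.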

    SeaEval for Multilingual Foundation Models: From Cross-Lingual Alignment to Cultural Reasoning

    Full text link
    We present SeaEval, a benchmark for multilingual foundation models. In addition to characterizing how these models understand and reason with natural language, we also investigate how well they comprehend cultural practices, nuances, and values. Alongside standard accuracy metrics, we investigate the brittleness of foundation models in the dimensions of semantics and multilinguality. Our analyses span both open-source and closed models, leading to empirical results across classic NLP tasks, reasoning, and cultural comprehension. Key findings indicate that (1) most models exhibit varied behavior when given paraphrased instructions; (2) many models still suffer from exposure bias (e.g., positional bias, majority label bias); (3) for questions rooted in factual, scientific, and commonsense knowledge, consistent responses are expected across multilingual queries that are semantically equivalent, yet most models surprisingly demonstrate inconsistent performance on these queries; and (4) multilingually-trained models have not attained "balanced multilingual" capabilities. Our endeavors underscore the need for more generalizable semantic representations and enhanced multilingual contextualization. SeaEval can serve as a launchpad for more thorough investigations and evaluations for multilingual and multicultural scenarios.
    Comment: 15 pages, 7 figures.
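    Finding (3) suggests a simple kind of measurement: ask semantically equivalent questions in several languages and score how often the answers agree. The sketch below is our own minimal illustration of such a cross-lingual consistency check; the metric, data format, and language codes are assumptions, not SeaEval's actual definitions.

        # Minimal sketch of a cross-lingual consistency score: the fraction of
        # questions for which the model answers identically in every language.
        # This is an illustrative metric, not SeaEval's published one.

        def cross_lingual_consistency(answers_per_question):
            """answers_per_question: list of dicts mapping a language code to
            the model's answer for that language's version of the question."""
            consistent = sum(
                1 for answers in answers_per_question
                if len(set(answers.values())) == 1  # same answer everywhere
            )
            return consistent / len(answers_per_question)

        # Toy usage: two questions, each asked in three languages.
        results = [
            {"en": "Paris", "zh": "Paris", "id": "Paris"},     # consistent
            {"en": "8849 m", "zh": "8848 m", "id": "8849 m"},  # inconsistent
        ]
        print(cross_lingual_consistency(results))  # 0.5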